
 Child Welfare


Beyond Predictive Algorithms in Child Welfare

Moon, Erina Seh-Young, Saxena, Devansh, Maharaj, Tegan, Guha, Shion

arXiv.org Artificial Intelligence

Caseworkers in the child welfare (CW) sector use predictive decision-making algorithms built on risk assessment (RA) data to guide and support CW decisions. Researchers have highlighted that RAs can contain biased signals which flatten CW case complexities, and that the algorithms may benefit from incorporating contextually rich case narratives, i.e., the casenotes written by caseworkers. To investigate this hypothesized improvement, we quantitatively deconstructed two commonly used RAs from a United States CW agency. We trained classifier models to compare the predictive validity of RAs with and without casenote narratives, and applied computational text analysis to the casenotes to surface the topics they contain. Our study finds that common risk metrics used to assess families and build CW predictive risk models (PRMs) are unable to predict discharge outcomes for children who are not reunified with their birth parent(s). We also find that although casenotes cannot predict discharge outcomes, they contain contextual case signals. Given the lack of predictive validity of RA scores and casenotes, we propose moving beyond quantitative risk assessments for public sector algorithms and towards using contextual sources of information, such as narratives, to study public sociotechnical systems.
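The abstract does not specify which text-analysis method was applied to the casenotes; as a hedged illustration, a TF-IDF-style term weighting (a common first step in computational text analysis) can surface the distinctive terms in a small set of notes. The `tfidf_terms` helper and the example snippets below are hypothetical, not drawn from the paper:

```python
import math
from collections import Counter

def tfidf_terms(docs, top_n=3):
    """Score each term in each document by term frequency times inverse
    document frequency, and return the top-scoring terms per document."""
    doc_freq = Counter()
    for doc in docs:
        doc_freq.update(set(doc.split()))  # count documents containing each term
    n = len(docs)
    top_terms = []
    for doc in docs:
        words = doc.split()
        tf = Counter(words)
        scores = {t: (c / len(words)) * math.log(n / doc_freq[t])
                  for t, c in tf.items()}
        top_terms.append(sorted(scores, key=scores.get, reverse=True)[:top_n])
    return top_terms

# Toy, invented casenote snippets (not real data).
notes = [
    "mother missed scheduled visit",
    "mother attended scheduled visit",
    "family reported housing instability",
]
distinctive = tfidf_terms(notes, top_n=2)
```

Terms shared by every note (such as "mother" or "visit" here) receive an IDF of zero, so only the distinguishing contextual signals rise to the top of each note's ranking.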


Microsoft Struggles to Gain on Google Despite Its Head Start in AI Search

WSJ.com: WSJD - Technology

When Microsoft unveiled an AI-powered version of Bing in February, the company said it could add $2 billion of revenue if the revamped search engine could pry away even a single point of market share from Google.


Toward Improving Predictive Risk Modelling for New Zealand's Child Welfare System Using Clustering Methods

Barmomanesh, Sahar, Miranda-Soberanis, Victor

arXiv.org Artificial Intelligence

The combination of clinical judgement and predictive risk models crucially assists social workers in identifying children at risk of maltreatment and deciding when authorities should intervene. Predictive risk modelling to address this matter has been initiated by several governmental welfare authorities worldwide, using administrative data and machine learning algorithms. While previous studies have investigated risk factors relating to child maltreatment, several gaps remain in understanding how such risk factors interact and whether predictive risk models perform differently for children with different features. By integrating Principal Component Analysis and K-Means clustering, this paper presents initial findings of our work on the identification of such features as well as their potential effect on current risk modelling frameworks. This approach allows us to examine existing, as-yet-unidentified clusters of New Zealand (NZ) children reported with care and protection concerns, to analyse their inner structure, and to evaluate the performance of prediction models trained cluster-wise. We aim to discover the degree of clustering required as an early step in the development of predictive risk models for child maltreatment, and so enhance the accuracy of such models intended for use by child protection authorities. The results from testing LASSO logistic regression models trained on the identified clusters revealed no significant difference in their performance. The models, however, performed slightly better for two clusters comprising younger children. Our results suggest that separate models might need to be developed for children of certain ages to gain additional control over the error rates and to improve model accuracy. While results are promising, more evidence is needed to draw definitive conclusions, and further investigation is necessary.
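The clustering step can be sketched in miniature. The pure-Python K-Means below is a minimal illustration, not the authors' implementation: in the paper's pipeline, the input vectors would be PCA-reduced administrative features, and a separate LASSO logistic regression would then be trained per cluster. The `kmeans` function and the toy points are assumptions:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-Means on numeric tuples: assign each point to its nearest
    centroid, recompute centroids as cluster means, and repeat until stable."""
    rng = random.Random(seed)
    centroids = [tuple(p) for p in rng.sample(points, k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        new_centroids = [
            tuple(sum(dim) / len(members) for dim in zip(*members)) if members else c
            for c, members in zip(centroids, clusters)
        ]
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids, clusters

# Toy 2-D vectors standing in for PCA-reduced case features.
points = [(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (10.0, 10.2)]
centroids, clusters = kmeans(points, k=2, seed=1)
```

Once cluster memberships are fixed, fitting one model per cluster (as the paper does with LASSO logistic regression) lets each model's error rates be inspected separately, which is the motivation for the age-based split the authors suggest.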


Report: U.S. loses AI leadership to India despite a 6-year head start

#artificialintelligence

Peak's inaugural Decision Intelligence (DI) Maturity Index found that while the U.S. was an early leader in artificial intelligence (AI), India is now the more mature market when it comes to readying their business to adopt AI. While the U.S. was an early leader in AI, with 28% of U.S. businesses adopting the technology over six years ago – compared to 25% in India and 20% in the U.K. – India is the more mature market when it comes to leveraging AI, scoring 64 (out of 100) on Peak's DI maturity scale, while the U.S. charted 52 and the U.K. just 44. What's setting Indian businesses apart is internal communication and education about AI to ensure broad support – 18% of U.S. workers weren't sure if their business used AI, compared to only 2% of Indian workers. Further, 78% of junior staff in India expect AI to have a positive impact on worker well-being over the next five years, compared to 47% of those in the U.S. The report also found that the way businesses structure data teams is crucial to successful AI adoption, with the majority of Indian businesses having data practitioners embedded in commercial teams to support analysis – by contrast, most U.S. businesses have a central data team. Moreover, while California is historically seen as the mecca of tech innovation, New York is ahead in AI leadership as it scored an average of 61 out of 100, compared to California, which charted 58. This is because New York is the top financial services center in the U.S. – financial services being the second-most-mature industry behind IT, computing and technology, with a mean maturity score of 56 across all three markets (U.S., U.K. and India).



Sibyl: Explaining Machine Learning Models for High-Stakes Decision Making

#artificialintelligence

As machine learning is applied to an increasingly large number of domains, the need for an effective way to explain its predictions grows apace. In the domain of child welfare screening, machine learning offers a promising method of consolidating the large amount of data that screeners must look at, potentially improving the outcomes for children reported to child welfare departments. Interviews and case studies suggest that adding an explanation alongside the model prediction may result in better outcomes, but it is not obvious what kind of explanation would be most useful in this context. Through a series of interviews and user studies, we developed Sibyl, a machine learning explanation dashboard specifically designed to aid child welfare screeners' decision making. When testing Sibyl, we evaluated four different explanation types and, based on this evaluation, concluded that a local feature contribution approach was most useful to screeners.
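For a linear model, local feature contributions have an exact closed form: feature i contributes w_i * (x_i - baseline_i) relative to a baseline input, and the contributions sum to the change in the model's output. The article does not describe Sibyl's internals; the sketch below, with a hypothetical `local_contributions` helper and made-up weights, only illustrates the general idea of this explanation type:

```python
def local_contributions(weights, baseline, x):
    """Exact local contributions for a linear model f(x) = b + sum(w_i * x_i).

    Each feature's contribution is measured relative to a baseline input,
    so the contributions sum to f(x) - f(baseline)."""
    return {name: w * (x[name] - baseline[name]) for name, w in weights.items()}

# Hypothetical screening model with two invented risk features.
weights = {"prior_reports": 0.8, "caregiver_support": -0.5}
baseline = {"prior_reports": 1.0, "caregiver_support": 2.0}
case = {"prior_reports": 4.0, "caregiver_support": 1.0}
contribs = local_contributions(weights, baseline, case)
```

A screener reading such an output sees which features pushed this particular case's score up or down, rather than a single opaque number, which is the property that made the local approach the most useful of the four explanation types tested.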


A Conceptual Framework for Using Machine Learning to Support Child Welfare Decisions

Chor, Ka Ho Brian, Rodolfa, Kit T., Ghani, Rayid

arXiv.org Artificial Intelligence

Human services systems make key decisions that impact individuals in society. The U.S. child welfare system makes such decisions, from screening-in hotline reports of suspected abuse or neglect for child protective investigations, to placing children in foster care, to returning children to permanent home settings. These complex and impactful decisions on children's lives rely on the judgment of child welfare decision-makers. Child welfare agencies have been exploring ways to support these decisions with empirical, data-informed methods that include machine learning (ML). This paper describes a conceptual framework for ML to support child welfare decisions. The ML framework guides how child welfare agencies might conceptualize a target problem that ML can solve; vet available administrative data for building ML; formulate and develop ML specifications that mirror the relevant populations and interventions the agencies are undertaking; and deploy, evaluate, and monitor ML as child welfare context, policy, and practice change over time. Ethical considerations, stakeholder engagement, and avoidance of common pitfalls underpin the framework's impact and success. From abstract to concrete, we describe one application of this framework to support a child welfare decision. This ML framework, though child welfare-focused, is generalizable to solving other public policy problems.


Making machine learning more useful to high-stakes decision makers

#artificialintelligence

The U.S. Centers for Disease Control and Prevention estimates that one in seven children in the United States experienced abuse or neglect in the past year. Child protective services agencies around the nation receive a high number of reports each year (about 4.4 million in 2019) of alleged neglect or abuse. With so many cases, some agencies are implementing machine learning models to help child welfare specialists screen cases and determine which to recommend for further investigation. But these models don't do any good if the humans they are intended to help don't understand or trust their outputs. Researchers at MIT and elsewhere launched a research project to identify and tackle machine learning usability challenges in child welfare screening.

